# Multi-GPU Training

**Bart Large Teaser De V2** (bettertextapp) · Large Language Model, Transformers · downloads: 123 · likes: 0
Large German text-processing model based on the BART architecture, suited to a variety of natural language processing tasks.
**Lightblue Reranker 0.5 Cont Filt Gguf** (RichardErkhov) · Large Language Model · downloads: 2,130 · likes: 0
Text-ranking model fine-tuned from Qwen2.5-0.5B-Instruct, suited to information retrieval and relevance-ranking tasks.
**Donut 240202** (Yazawa, MIT) · Text Recognition, Transformers · downloads: 93 · likes: 0
Document-understanding model fine-tuned from Yazawa/donut-base-sroie, suited to structured document information extraction.
**Quan 1.8b Chat** (qnguyen3, other license) · Large Language Model, Transformers · downloads: 31 · likes: 11
Qwen-1.8B fine-tuned on English-Vietnamese bilingual data using the ChatML prompt template.
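The entry above mentions the ChatML prompt template. As a minimal sketch (the exact system prompt used during fine-tuning is not given here, so the messages below are illustrative), ChatML wraps each conversation turn in `<|im_start|>role ... <|im_end|>` markers and leaves an open assistant turn for generation:

```python
def chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt,
    ending with an open assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # open turn: model generates from here
    return "\n".join(parts)

# Hypothetical bilingual exchange, for illustration only
prompt = chatml([
    {"role": "system", "content": "You are a helpful bilingual assistant."},
    {"role": "user", "content": "Translate 'hello' into Vietnamese."},
])
print(prompt)
```

In practice the model's tokenizer chat template (e.g. `tokenizer.apply_chat_template` in transformers) should be preferred over hand-built strings, since it matches the format used in training.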
**Sikong Llama 7b Chinese** (SikongSphere) · Large Language Model, Transformers · downloads: 55 · likes: 4
Model fine-tuned on custom datasets from Linly-Chinese-LLaMA-7b-hf.
**Whisper Large V2 Hi V3** (anuragshas, Apache-2.0) · Speech Recognition, Transformers, Other · downloads: 21 · likes: 1
Hindi speech-recognition model fine-tuned from OpenAI Whisper Large-v2; reaches a word error rate (WER) of 11.3% on the Common Voice 11.0 Hindi test set.
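Several entries in this list report word error rate (WER). As a minimal sketch of how that metric is defined, WER is the word-level Levenshtein (edit) distance between reference and hypothesis transcripts, divided by the number of reference words; the strings below are illustrative, not taken from any of the models' evaluation sets:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count.
    Assumes a non-empty reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution or match
        prev = curr
    return prev[len(hyp)] / len(ref)

# One substitution against a four-word reference -> WER 0.25
print(wer("the cat sat down", "the cat sat up"))
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, so a reported figure like 1.0536 may be a genuine WER rather than a percentage.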
**Albert Xxl V2 Finetuned Squad** (anas-awadalla, Apache-2.0) · Question Answering, Transformers · downloads: 15 · likes: 1
Question-answering model: ALBERT-xxlarge-v2 fine-tuned on the SQuAD dataset.
**Wav2vec2 Common Voice Tr Demo Dist** (cromz22, Apache-2.0) · Speech Recognition, Transformers, Other · downloads: 26 · likes: 0
Automatic speech recognition (ASR) model: facebook/wav2vec2-large-xlsr-53 fine-tuned on the Common Voice Turkish dataset; WER 0.3242 on the evaluation set.
**Xtreme S Xlsr 300m Voxpopuli En** (anton-l, Apache-2.0) · Speech Recognition, Transformers, English · downloads: 28 · likes: 0
Speech-recognition model: facebook/wav2vec2-xls-r-300m fine-tuned on the GOOGLE/XTREME_S VOXPOPULI.EN dataset for English speech-to-text.
**Xtreme S Xlsr Minds14** (anton-l, Apache-2.0) · Speech Recognition, Transformers · downloads: 25 · likes: 1
Speech-processing model fine-tuned from facebook/wav2vec2-xls-r-300m; reports high F1 and accuracy on its evaluation set.
**Albert Xlarge V2 Squad V2** (ktrapeznikov) · Question Answering, Transformers · downloads: 104 · likes: 2
Question-answering system: ALBERT-xlarge-v2 fine-tuned on the SQuAD v2 dataset.
**Wav2vec2 2 Bart Base** (patrickvonplaten) · Speech Recognition, Transformers · downloads: 493 · likes: 5
Speech-recognition model combining wav2vec2-base and bart-base, fine-tuned on the LibriSpeech ASR clean dataset.
**Sew D Mid 400k Librispeech Clean 100h Ft** (patrickvonplaten, Apache-2.0) · Speech Recognition, Transformers · downloads: 15 · likes: 1
ASR model: asapp/sew-d-mid-400k fine-tuned on the LibriSpeech ASR clean dataset; WER 1.0536 on the evaluation set.
**Wav2vec2 Librispeech Clean 100h Demo Dist** (patrickvonplaten, Apache-2.0) · Speech Recognition, Transformers · downloads: 15 · likes: 0
Speech-recognition model: facebook/wav2vec2-large-lv60 fine-tuned on the LibriSpeech ASR clean dataset.